11 research outputs found

    Panoramic Panoptic Segmentation: Insights Into Surrounding Parsing for Mobile Agents via Unsupervised Contrastive Learning

    Full text link
    In this work, we introduce panoramic panoptic segmentation as the most holistic form of scene understanding, both in terms of Field of View (FoV) and image-level understanding, for standard camera-based input. A complete understanding of the surroundings provides a mobile agent with the maximum amount of information, which is essential for any intelligent vehicle making informed decisions in a safety-critical dynamic environment such as real-world traffic. To overcome the lack of annotated panoramic images, we propose a framework which allows model training on standard pinhole images and transfers the learned features to the panoramic domain in a cost-minimizing way. The domain shift from pinhole to panoramic images is non-trivial, as large objects and surfaces are heavily distorted close to the image border regions and look different across the two domains. Using our proposed method with dense contrastive learning, we achieve significant improvements over a non-adapted approach. Depending on the efficient panoptic segmentation architecture, we improve by 3.5-6.5%, measured in Panoptic Quality (PQ), over non-adapted models on our established Wild Panoramic Panoptic Segmentation (WildPPS) dataset. Furthermore, our efficient framework does not need access to images of the target domain, making it a feasible domain generalization approach suitable for limited hardware settings. As additional contributions, we publish WildPPS, the first panoramic panoptic image dataset, to foster progress in surrounding perception, and explore a novel training procedure combining supervised and contrastive training. Comment: Accepted to IEEE Transactions on Intelligent Transportation Systems (T-ITS). Extended version of arXiv:2103.00868. The project is at https://github.com/alexanderjaus/PP
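    The Panoptic Quality metric reported above has a simple closed form, PQ = Σ_TP IoU / (|TP| + ½|FP| + ½|FN|). A minimal sketch follows; segment matching at IoU > 0.5 is assumed to have been done upstream, and the function name is illustrative, not taken from the paper's code:

```python
def panoptic_quality(tp_ious, num_fp, num_fn):
    """Compute Panoptic Quality (PQ) from matched segments.

    tp_ious -- IoU of each true-positive (predicted, ground-truth) segment pair
    num_fp  -- number of unmatched predicted segments (false positives)
    num_fn  -- number of unmatched ground-truth segments (false negatives)
    """
    denom = len(tp_ious) + 0.5 * num_fp + 0.5 * num_fn
    if denom == 0:  # no segments at all; define PQ as 0 in this degenerate case
        return 0.0
    # PQ factorizes as SQ * RQ (mean IoU of matches times an F1-style
    # recognition term), but the single-fraction form below is equivalent.
    return sum(tp_ious) / denom
```

    For example, two matches with IoUs 0.8 and 0.6 plus one false positive and one false negative give PQ = 1.4 / 3 ≈ 0.467.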

    Heavy to Light Meson Exclusive Semileptonic Decays in Effective Field Theory of Heavy Quark

    Full text link
    We present a general study of exclusive semileptonic decays of heavy (B, D, B_s) to light (pi, rho, K, K^*) mesons in the framework of the effective field theory of heavy quarks. The transition matrix elements of these decays can be systematically characterized by a set of wave functions which are independent of the heavy quark mass, except for an implicit scale dependence. Form factors for all these decays are calculated consistently within the effective theory framework using the light-cone sum rule method at leading order in the 1/m_Q expansion. The branching ratios of these decays are evaluated, and the heavy and light flavor symmetry breaking effects are investigated. We also compare our results with predictions from other approaches, among which are the relations proposed recently in the framework of large energy effective theory. Comment: 18 pages, RevTeX, 5 figures; added references and a comparison of results, and corrected signs in some formulas
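    As a concrete illustration of the form-factor language used in this abstract, one common textbook convention for a heavy-to-light pseudoscalar transition such as B → π parametrizes the hadronic matrix element with two form factors, f₊ and f₀ (a standard decomposition, not quoted from the paper itself):

```latex
\langle \pi(p) \,|\, \bar{q}\,\gamma^{\mu} b \,|\, B(p_B) \rangle
  = f_{+}(q^{2}) \left[ (p_B + p)^{\mu}
      - \frac{m_B^{2} - m_{\pi}^{2}}{q^{2}}\, q^{\mu} \right]
  + f_{0}(q^{2})\, \frac{m_B^{2} - m_{\pi}^{2}}{q^{2}}\, q^{\mu},
\qquad q = p_B - p .
```

    Here f₊(0) = f₀(0), so the matrix element stays finite at q² = 0; the wave functions mentioned in the abstract determine these form factors at leading order in 1/m_Q.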

    Use and Misuse of QCD Sum Rules in Heavy-to-light Transitions: the Decay B → ρ e ν Reexamined

    Full text link
    The existing calculations of the form factors describing the decay B → ρ e ν from QCD sum rules have yielded conflicting results at small values of the invariant mass squared of the lepton pair. We demonstrate that the disagreement originates from the failure of the short-distance expansion to describe the ρ meson distribution amplitude in the region where almost the whole momentum is carried by one of the constituents. This limits the applicability of QCD sum rules based on the short-distance expansion of a three-point correlation function to heavy-to-light transitions and calls for an expansion around the light-cone, as realized in the light-cone sum rule approach. We derive and update light-cone sum rules for all the semileptonic form factors, using recent results on the ρ meson distribution amplitudes. The results are presented in detail, together with a careful analysis of the uncertainties, including estimates of higher-twist effects, and compared to lattice calculations and recent CLEO measurements. We also derive a set of "improved" three-point sum rules, in which some of the problems of the short-distance expansion are avoided and whose results agree to good accuracy with those from light-cone sum rules. Comment: 34 pages, LaTeX; two references added; one typo in one table corrected; accepted for publication in Phys. Rev.
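    For reference, the semileptonic form factors the abstract refers to are commonly defined through a decomposition like the following (one widely used convention for B → V transitions; signs and normalizations differ between papers, so treat this as illustrative):

```latex
\langle \rho(p,\varepsilon) \,|\, \bar{q}\,\gamma_{\mu}(1-\gamma_{5})\, b \,|\, B(p_B) \rangle
 = -\, i\,\varepsilon^{*}_{\mu}\, (m_B + m_{\rho})\, A_{1}(q^{2})
 + i\, (p_B + p)_{\mu}\, (\varepsilon^{*} \cdot q)\, \frac{A_{2}(q^{2})}{m_B + m_{\rho}}
 + i\, q_{\mu}\, (\varepsilon^{*} \cdot q)\, \frac{2 m_{\rho}}{q^{2}}
     \left[ A_{3}(q^{2}) - A_{0}(q^{2}) \right]
 + \epsilon_{\mu\nu\alpha\beta}\, \varepsilon^{*\nu} p_B^{\alpha} p^{\beta}\,
     \frac{2 V(q^{2})}{m_B + m_{\rho}} ,
```

    with q = p_B − p and ε the ρ polarization vector; the "small invariant mass squared of the lepton pair" mentioned above is the region of small q², where the short-distance expansion breaks down.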

    PROSPECTS FOR B-PHYSICS IN THE NEXT DECADE

    Get PDF
    In these lectures I review what has been learned from studies of b-quark decays, including semileptonic decays (V_ub and V_cb), B⁰–B̄⁰ mixing, and rare B decays. A discussion of CP violation follows, leading to a summary of plans for future experiments and what is expected to be learned from them.
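    The B⁰–B̄⁰ mixing reviewed in these lectures is governed by the mass difference Δm_d between the two mass eigenstates. Neglecting CP violation in mixing and the width difference, the oscillation probabilities take the standard textbook form (quoted here for orientation, not from the lectures themselves):

```latex
P(B^{0} \to B^{0};\, t) = \frac{e^{-\Gamma t}}{2}\,\bigl(1 + \cos \Delta m_d\, t\bigr),
\qquad
P(B^{0} \to \bar{B}^{0};\, t) = \frac{e^{-\Gamma t}}{2}\,\bigl(1 - \cos \Delta m_d\, t\bigr),
\qquad
x_d \equiv \frac{\Delta m_d}{\Gamma}.
```

    The dimensionless mixing parameter x_d is what time-integrated measurements constrain.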

    MedShapeNet -- A Large-Scale Dataset of 3D Medical Shapes for Computer Vision

    No full text
    16 pages. Prior to the deep learning era, shape was commonly used to describe objects. Nowadays, state-of-the-art (SOTA) algorithms in medical imaging are predominantly diverging from computer vision, where voxel grids, meshes, point clouds, and implicit surface models are used. This is seen from numerous shape-related publications in premier vision conferences as well as the growing popularity of ShapeNet (about 51,300 models) and Princeton ModelNet (127,915 models). For the medical domain, we present a large collection of anatomical shapes (e.g., bones, organs, vessels) and 3D models of surgical instruments, called MedShapeNet, created to facilitate the translation of data-driven vision algorithms to medical applications and to adapt SOTA vision algorithms to medical problems. As a unique feature, we directly model the majority of shapes on the imaging data of real patients. As of today, MedShapeNet includes 23 datasets with more than 100,000 shapes that are paired with annotations (ground truth). Our data is freely accessible via a web interface and a Python application programming interface (API) and can be used for discriminative, reconstructive, and variational benchmarks as well as various applications in virtual, augmented, or mixed reality, and 3D printing. As examples, we present use cases in the fields of classification of brain tumors, facial and skull reconstructions, multi-class anatomy completion, education, and 3D printing. In the future, we will extend the data and improve the interfaces. The project pages are: https://medshapenet.ikim.nrw/ and https://github.com/Jianningli/medshapenet-feedbac
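    The abstract notes that most shapes are modeled directly on patient imaging data. A minimal sketch of one step in such a pipeline is extracting the surface voxels of a binary segmentation mask as a crude point-cloud shape; this is a generic illustration in pure Python, not the MedShapeNet API:

```python
def surface_voxels(mask):
    """Return coordinates of foreground voxels that touch background or the
    volume boundary (6-connectivity) in a binary 3D mask, i.e. its surface."""
    nz, ny, nx = len(mask), len(mask[0]), len(mask[0][0])
    surface = []
    neighbours = ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                  (0, -1, 0), (0, 0, 1), (0, 0, -1))
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if not mask[z][y][x]:
                    continue
                # surface voxel: at least one 6-neighbour is background
                # or lies outside the volume
                for dz, dy, dx in neighbours:
                    z2, y2, x2 = z + dz, y + dy, x + dx
                    inside = 0 <= z2 < nz and 0 <= y2 < ny and 0 <= x2 < nx
                    if not inside or not mask[z2][y2][x2]:
                        surface.append((z, y, x))
                        break
    return surface
```

    A solid 3x3x3 cube, for instance, yields 26 surface voxels: every voxel except the fully enclosed center.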

    Analytical Separation of Enantiomers by Gas Chromatography on Chiral Stationary Phases

    No full text

    Cookstove implementation and Education for Sustainable Development: A review of the field and proposed research agenda

    No full text

    Physics of neutrinos

    No full text

    The Analytical Separation of Enantiomers by Gas Chromatography on Chiral Stationary Phases

    No full text